Self-Driving Car Engineer Nanodegree

Advanced Lane Finding Project By Abdelkoddous Khamsi

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image ("birds-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

First, I'll compute the camera calibration using chessboard images

In [1]:
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math

%matplotlib qt

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.

# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
N = len(images)

print(images)
print(type(images))
print("Number of images ==>", N)

# Step through the list and search for chessboard corners
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6),None)

    # If found, add object points, image points
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
        cv2.imshow('img',img)
        cv2.waitKey(1000)
    else:
        print("No corners found in ", fname)

cv2.destroyAllWindows()
['camera_cal\\calibration1.jpg', 'camera_cal\\calibration10.jpg', 'camera_cal\\calibration11.jpg', 'camera_cal\\calibration12.jpg', 'camera_cal\\calibration13.jpg', 'camera_cal\\calibration14.jpg', 'camera_cal\\calibration15.jpg', 'camera_cal\\calibration16.jpg', 'camera_cal\\calibration17.jpg', 'camera_cal\\calibration18.jpg', 'camera_cal\\calibration19.jpg', 'camera_cal\\calibration2.jpg', 'camera_cal\\calibration20.jpg', 'camera_cal\\calibration3.jpg', 'camera_cal\\calibration4.jpg', 'camera_cal\\calibration5.jpg', 'camera_cal\\calibration6.jpg', 'camera_cal\\calibration7.jpg', 'camera_cal\\calibration8.jpg', 'camera_cal\\calibration9.jpg']
<class 'list'>
Number of images ==> 20
No corners found in  camera_cal\calibration1.jpg
No corners found in  camera_cal\calibration4.jpg
No corners found in  camera_cal\calibration5.jpg
In [2]:
## Camera calibration given
%matplotlib inline

def cal_undistort(img, objpoints, imgpoints):
    # Use cv2.calibrateCamera() and cv2.undistort()
    cam_calib = cv2.calibrateCamera(objpoints, imgpoints, img.shape[:-1], None, None)
    undist = cv2.undistort(img, cam_calib[1], cam_calib[2])
    return undist

def PlotComparisonView(image_before, image_after, performedProcessing, colormap = (None, None)):
    """
    To avoid repeting ploting commands every time, will be useful throughout
    this notebook 
    """

    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
    f.tight_layout()
    ax1.imshow(image_before, cmap = colormap[0])
    ax1.set_title('Original Image', fontsize=35)
    ax2.imshow(image_after, cmap = colormap[1])
    ax2.set_title(performedProcessing + ' Image', fontsize=35)
    plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
    plt.show()

img = cv2.imread(images[1])

print("objpoints array len", len( objpoints))
print("imgpoints array len", len( imgpoints))

undistorted_img = cal_undistort(img, objpoints, imgpoints)

PlotComparisonView(img, undistorted_img, 'Undistorted')
objpoints array len 17
imgpoints array len 17
In [3]:
# Visualizing before and after of Undistortion on some chessboard images
for i in range(13,16):
    
    img = cv2.imread(images[i])
    undistorted_img = cal_undistort(img, objpoints, imgpoints)
    
    PlotComparisonView(img, undistorted_img, 'Undistorted')

Then, I'll Apply this distortion correction to raw road images

In [4]:
import matplotlib.image as mpimg

# Visualizing before and after of Undistortion on real world application image
test_images = glob.glob('test_images/test*.jpg')

for i in range(1,4):
    img = mpimg.imread(test_images[i])
    undistorted_img = cal_undistort(img, objpoints, imgpoints)
    PlotComparisonView(img, undistorted_img, 'Undistorted Road')

I'll use Sobel operator gradients and Color transforms to create a thresholded binary image

  • Let's start playing with the sobel operator
In [5]:
def axis_gradient(img, ksize=9, thresh=(30,255)):
    ## x gradient

    # convert to gray scale
    #if channel == 'gray':
    #    gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    #else:
    #    gray_image = img
    if len(img.shape) == 3:
        gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    elif len(img.shape) ==2:
        gray_image = img
    
    height = img.shape[0]
    width = img.shape[1]
    # derivative along x
    sobelx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize)

    abs_sobel = np.absolute(sobelx)

    scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))

    single_axis_binary = np.zeros_like(gray_image, dtype=np.uint8)
    #single_axis_binary = np.zeros((height,width,1), np.uint8)

    single_axis_binary[ (scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1

    return single_axis_binary#cv2.Canny(gray_image, 20, 50) #

def magnitude_gradient(img, ksize=9, thresh=(50,255),channel='gray'):
    ## mag gradient
    # convert to gray scale
    #if channel == 'gray':
    #    gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    #else:
    #    gray_image = img
    if len(img.shape) == 3:
        gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    elif len(img.shape) ==2:
        gray_image = img
        
    # derivative along x and y
    sobelx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize)
    sobely = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize)
    sobelxy = np.sqrt(sobelx**2 + sobely**2)

    scaled_sobel = np.uint8(255*sobelxy/np.max(sobelxy))

    mag_binary = np.zeros_like(gray_image)
    mag_binary[ (scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1

    return mag_binary

def direction_gradient(img, ksize=9, thresh=(0.7,1.3),channel='gray'):
    ## direction gradient

    # convert to gray scale
    #if channel == 'gray':
    #    gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    #else:
    #    gray_image = img
    if len(img.shape) == 3:
        gray_image = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    elif len(img.shape) ==2:
        gray_image = img
        
    # derivative along x and y
    sobelx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize)
    sobely = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize)
    
    abs_sobelx = np.absolute(sobelx)
    abs_sobely = np.absolute(sobely)

    sobel_direction = np.arctan2(abs_sobely, abs_sobelx)

    dir_binary = np.zeros_like(sobel_direction)
    dir_binary[ (sobel_direction >= thresh[0]) & (sobel_direction <= thresh[1])] = 1

    return dir_binary
In [6]:
image = mpimg.imread(test_images[1])

# Apply all three distinct gradient
axis_binary_output = axis_gradient(image)
mag_binary_output = magnitude_gradient(image)
dir_binary_output = direction_gradient(image)


PlotComparisonView(image, axis_binary_output, 'Thresholded x-axis binary', (None, 'gray'))
PlotComparisonView(image, mag_binary_output, 'Thresholded Magnitude binary', (None, 'gray'))
PlotComparisonView(image, dir_binary_output, 'Thresholded Direction binary', (None, 'gray'))
In [7]:
# Play with threshold combination
combined_binary = np.zeros_like(axis_binary_output)
combined_binary[ (axis_binary_output==1) | (mag_binary_output*dir_binary_output == 1) ] = 1

PlotComparisonView(image, combined_binary, 'Combined Threshold binary', (None, 'gray'))
In [8]:
def combinedGradient_thresholdedBinary(img):
    
    #print("Image shape ==>",img.shape)
    axis_binary_output = axis_gradient(img)
    mag_binary_output = magnitude_gradient(img)
    dir_binary_output = direction_gradient(img)
    
    combined_binary = np.zeros_like(axis_binary_output)
    combined_binary[ (axis_binary_output==1) | (mag_binary_output*dir_binary_output == 1) ] = 1

    return combined_binary
In [9]:
# Check on all the test images
for i in range ( len(test_images) ):
    
    img = mpimg.imread(test_images[i])
    
    combined_threshold = combinedGradient_thresholdedBinary(img)
    
    PlotComparisonView(img, combined_threshold, 'Combined Thresholds ', (None, 'gray'))
    
    #cv2.imwrite("output_images/write_test"+str(i)+".jpg", combined_threshold)

    
  • Let's start playing with color spaces to solve the issues of diffirent colorations of th road we can see for example in the first image.
In [10]:
def to_HLS_Color_Channel(img, L_thresh=(200,255), S_thresh=(200,255)):
    
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    
    H_thresh = (10,40)
    H = hls[:,:,0]
    L = hls[:,:,1]
    S = hls[:,:,2]
    
    S_binary = np.zeros_like(S)
    L_binary = np.zeros_like(L)
    H_binary = np.zeros_like(H)

#    S_binary[ ( ( L >= L_thresh[0] ) & ( L <= L_thresh[1] ) ) 
#            | ( ( S >= S_thresh[0]) & (S <= S_thresh[1] ) ) ] = 1

    L_binary[ (L >= L_thresh[0]) & (L <= L_thresh[1]) ] = 1 # to extract white lanes
    S_binary[ (S >= S_thresh[0]) & (S <= S_thresh[1]) ] = 1
    H_binary[ (H >= H_thresh[0]) & (H <= H_thresh[1]) ] = 1
    
    return L_binary | S_binary
In [11]:
image = mpimg.imread(test_images[0])
#s_binary = to_S_Color_Channel(image,(190,255))
s_binary = to_HLS_Color_Channel(image)

PlotComparisonView(image, s_binary, 'HLS Channel ', (None, 'gray'))
In [12]:
# Check the color transform and the combined sobel gradients on all the test images
for i in range ( len(test_images) ):
    
    img = mpimg.imread(test_images[i])
    
    s_binary = to_HLS_Color_Channel(img)
    
    combined_threshold = combinedGradient_thresholdedBinary(s_binary)
    
    PlotComparisonView(img, combined_threshold, 'Combined Thresholds ', (None, 'gray'))

I'll use now a perspective transform to rectify binary image ("bird's eye view")

In [13]:
def drawConvexPolygonEdgesVertices(img, pts, edge = True, vertex = True):
    """
    pts: a sequence of pts that create a convex polygon
    """
    col = [255, 0, 0]
    thick = 3
    rad = 15
    
    img_copy = img.copy() # avoid img_copy = img as both images will be modified 
    
    if vertex == True:
        # draw vertices
        for pt in pts:
            cv2.circle(img_copy, pt, rad, col, -thick, lineType=8, shift=0)
    
   
    if edge == True:
        # draw edges
        N = len(pts)
        for i in range(N):
            
            cv2.line(img_copy, pts[ i ], pts[ (i+1) % N ], col, thick)
    
    return img_copy
In [14]:
%matplotlib qt
image = mpimg.imread(test_images[2])
image = cal_undistort(image, objpoints, imgpoints)
img_size = ( image.shape[1], image.shape[0] )

# Manually selecting the cordinnates for a src points of RoI
src_p1 = (316,660)  # bottom left
src_p2 = (1050,660) # bottom right
src_p3 = (762,480)  # top right
src_p4 = (578,480)  # top left

src = np.float32([ src_p1, src_p2, src_p3, src_p4 ])

image_RoI = drawConvexPolygonEdgesVertices(image, [ src_p1, src_p2, src_p3, src_p4 ])

plt.imshow(image_RoI)
plt.show()
In [15]:
%matplotlib inline

height = image.shape[0]
width = image.shape[1]
# Manually selecting the cordinnates for a dst points of RoI
# Might need  a more reliable way to select src and dst points

dst_p1 = (300, height - 30)       # bottom left
dst_p2 = (width-330, height - 30) # bottom right
dst_p3 = (width - 250, 250)         # top right
dst_p4 = (300+70, 250)               # top left

dst = np.float32([ dst_p1, dst_p2, dst_p3, dst_p4])

# generate perspective transofrmation matrix given source and destination points
M = cv2.getPerspectiveTransform(src, dst)

# perform perspective transformation
warped = cv2.warpPerspective(image, M, img_size, flags=cv2.INTER_LINEAR)

warped_RoI = drawConvexPolygonEdgesVertices(warped, [ dst_p1, dst_p2, dst_p3, dst_p4])

PlotComparisonView(image_RoI, warped_RoI, 'Perspective transformed ', colormap = (None, None))
In [16]:
# Check result for all test images
for i in range ( len(test_images) ):
    
    img = mpimg.imread(test_images[i])
    undistort_img = cal_undistort(img, objpoints, imgpoints)

    warped_img = cv2.warpPerspective(undistort_img, M, img_size, flags=cv2.INTER_LINEAR)

    PlotComparisonView(undistort_img, warped_img, 'Perspective transformed ', (None, 'gray'))
In [17]:
# Check perspective transfom result for all test binary images
test_warped_imgs = []
for i in range ( len(test_images) ):
    
    img = mpimg.imread(test_images[i])
    undistort_img = cal_undistort(img, objpoints, imgpoints)

    
    s_binary = to_HLS_Color_Channel(img)
    
    combined_threshold = combinedGradient_thresholdedBinary(s_binary)
    
    warped_img = cv2.warpPerspective(combined_threshold, M, img_size, flags=cv2.INTER_LINEAR)
    test_warped_imgs += [warped_img]

    PlotComparisonView(combined_threshold, warped_img, 'Perspective transformed ', ('gray', 'gray'))

Detect lane pixels and fit to find the lane boundary.

In [18]:
# reuse histogram from course material
def hist(img):
    # TO-DO: Grab only the bottom half of the image
    # Lane lines are likely to be mostly vertical nearest to the car

    height = img.shape[0]
    bottom_half = img[ height//2:,: ]

    # TO-DO: Sum across image pixels vertically - make sure to set `axis`
    # i.e. the highest areas of vertical lines should be larger values
    histogram = np.sum( bottom_half, axis=0)
    
    return histogram
In [19]:
for i in range(6):
    # Create histogram of image binary activations
    histogram = hist(test_warped_imgs[i])

    # Visualize the resulting histogram
    fig = plt.figure(1, figsize=(10, 4))
    plt.title("Random title")
    plt.grid()
    plt.plot(histogram)
    plt.show()

I'll detect now lane pixels on the bird's eye view using sliding windows approach and fit 2nd order polynomial for the lane.

In [20]:
def detectLanePixels(test_im, stats=True):
    
    # HYPERPARAMETERS
    # Choose the number of sliding windows
    nwindows = 9
    # Set the width of the windows +/- margin
    margin = 100
    # Set minimum number of pixels found to recenter window
    minpix = 50
    
    # Make sure image values are in [0, 255] interval
    if np.max(test_im) == 1:
        test_im = np.uint8(255*test_im)
    
    # Create a histogram for the second half of the image
    histogram = hist(test_im)

    # Create output image to draw on and visualize result
    out_img = np.dstack((test_im, test_im, test_im))

    # peak of left and right part of the histogram are starting points of left and right lanes
    midpoint = np.int(histogram.shape[0]//2)
    leftxbase = np.argmax(histogram[:midpoint])
    rightxbase = np.argmax(histogram[midpoint:]) + midpoint


    # Set height of windows - based on nwindows above and image shape
    window_height = np.int(test_im.shape[0]//nwindows)

    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = test_im.nonzero() # nonzero is a 2-elements tuple
    nonzeroy = np.array(nonzero[0]) # first element is an array of y coordinates of non-zero pixels
    nonzerox = np.array(nonzero[1]) # second element is an array of x coordinates of non-zero pixels

    # Current positions to be updated later for each window in nwindows
    leftx_current = leftxbase
    rightx_current = rightxbase

    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []
    
    # This list is only for a general stat purpose
    activatedPixels_perWindow = []

    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = test_im.shape[0] - (window+1)*window_height
        win_y_high = test_im.shape[0] - window*window_height

        #Find the four boundaries of the left & right windows on x axis 
        win_xleft_low = leftx_current - margin  
        win_xleft_high = leftx_current + margin 
        win_xright_low = rightx_current - margin  
        win_xright_high = rightx_current + margin

        #Identify the nonzero pixels in x and y within the window
        left_nonzero_pixels = ( (nonzerox > win_xleft_low) & (nonzerox < win_xleft_high)
                               & (nonzeroy > win_y_low) & (nonzeroy < win_y_high) ).nonzero()[0]

        right_nonzero_pixels = ( (nonzerox > win_xright_low) & (nonzerox < win_xright_high)
                               & (nonzeroy > win_y_low) & (nonzeroy < win_y_high) ).nonzero()[0]

        #print(len(left_nonzero_pixels), len(right_nonzero_pixels)) # How many non zero pixels indices are on 
                                                                   # the left & right line in current window
        activatedPixels_perWindow.insert(0,[len(left_nonzero_pixels), len(right_nonzero_pixels)])    
        
        # add the result to the one from previous windows
        left_lane_inds.append(left_nonzero_pixels) 
        right_lane_inds.append(right_nonzero_pixels) 

        # Draw the windows on the visualization image
        cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 5) 
        cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 5)

        # recalculating the base x-coordinates of left and right windows
        if (len(left_nonzero_pixels) > minpix):
            # Position of activated pixels in current left window
            leftx_pixelpos_current = nonzerox[left_nonzero_pixels]
            leftx_current = np.int(np.mean( leftx_pixelpos_current, axis=0) )
            #print("new left base is ==>", leftx_current)

        if ( len( right_nonzero_pixels ) > minpix):
            # Position of activated pixels in current right window
            rightx_pixelpos_current = nonzerox[right_nonzero_pixels]
            rightx_current = np.int(np.mean( rightx_pixelpos_current, axis=0) )
            #print("new right base is ==>", rightx_current)

    # reorganise indices across all windows
    left_lane_inds = np.concatenate( left_lane_inds )
    right_lane_inds = np.concatenate( right_lane_inds )
    
    if stats == True:
        print("leftxbase : ", leftxbase, 'rightxbase : ', rightxbase)
        print("Number of non zero pixels are in the whole image", nonzero[0].shape) #how many non zero pixels are in the whole image
        print("Number of non zero pixels indices on the left lane", left_lane_inds.shape) # how many non zero pixels indices are on the left line
        print("Number of non zero pixels indices on the right lane", right_lane_inds.shape) # how many non zero pixels indices are on the right line
        print("Number of activated pixels per window", np.array(activatedPixels_perWindow) )

    # get pixel position from pixel indices
    leftx_pos = nonzerox[left_lane_inds]
    lefty_pos = nonzeroy[left_lane_inds]
    rightx_pos = nonzerox[right_lane_inds]
    righty_pos = nonzeroy[right_lane_inds]

    out_img[ lefty_pos, leftx_pos] = [255, 0, 0]
    out_img[ righty_pos, rightx_pos] = [0, 0, 255]
    
    return leftx_pos, lefty_pos, rightx_pos, righty_pos, out_img
In [21]:
# Apply on all test images
for i in range(5):
    test_img = test_warped_imgs[i]

    lane_pixel_img = detectLanePixels(test_img, stats=False)[4]

    plt.imshow(lane_pixel_img)
    plt.show()

Let's fit a second order polymomial into the previous results:

In [22]:
def fit_polynomial_toLane(image, xm_ym_per_pix = (3.7/700, 30/720), plot_poly=True):
    
    # fit a polynomial into the detected pixels so far
    test_img = test_warped_imgs[5]
    leftx, lefty, rightx, righty, lanePixel_im = detectLanePixels(image, stats=False)

    # fitting into a 2nd order polynomial x = f(y)
    #leftfit = np.polyfit( lefty*xm_ym_per_pix[1], leftx*xm_ym_per_pix[0], 2 )
    #rightfit = np.polyfit( righty*xm_ym_per_pix[1], rightx*xm_ym_per_pix[0], 2)

    leftfit = np.polyfit( lefty, leftx, 2 )
    rightfit = np.polyfit( righty, rightx, 2)
    #print("leftfit", leftfit, "rightfit", rightfit)
    # Use the fitting parameters to plot the polynomial on the image
    ## generate y axis base 
    ploty = np.linspace(0, image.shape[0]-1, image.shape[0]).astype(int)
    #ploty = ploty*xm_ym_per_pix[1]
    
    ## compute leftx = f_left(y) & rightx = f_right(y)
    plot_leftx = ( leftfit[0]*ploty**2 + leftfit[1]*ploty + leftfit[2] ).astype(int)
    plot_rightx = ( rightfit[0]*ploty**2 + rightfit[1]*ploty + rightfit[2] ).astype(int)
    
    # plot left and right fitting on lanePixel_im
    #if plot_poly == True:
    #plt.plot( plot_leftx, ploty, color='yellow' )
    #plt.plot( plot_rightx, ploty, color='yellow')
    
    #print(ploty)
    #print(plot_rightx)
    for paint in range(-5,5):
        try:
            lanePixel_im[ ploty, plot_leftx+paint] = [255, 0, 0] # will be painting only a single pixel per y value => not very visible
            lanePixel_im[ ploty, plot_rightx+paint] = [255, 0, 0]
        except:
            break
    #return leftfit, rightfit, lanePixel_im
    return leftfit, rightfit, lanePixel_im
In [23]:
# Let's run on all 6 images
for i in range(6):
    
    current_test_img = test_warped_imgs[i]

    current_out_img = fit_polynomial_toLane(current_test_img)[2]
        
    plt.imshow(current_out_img)
    #cv2.imwrite("output_images/polyfit"+str(i)+".jpg", current_out_img)
    plt.show()

Compute curvature of the fitted polynomial

In [24]:
def compute_laneCurvature_vehiculeOffset(test_im):
    
    # introducing the conversion from pixel world to real scale meter world
    xm_per_pix = 3.7/700 # a rigourous reason should be given here?
    ym_per_pix = 30/720
    width = image.shape[1]

    left_fit, right_fit, polyfit_im = fit_polynomial_toLane(test_im, (xm_per_pix,ym_per_pix), plot_poly=True)
    plt.imshow(polyfit_im)

    ploty = np.linspace(0, test_im.shape[0]-1, test_im.shape[0])

    # Select the point where the curvature is calculated
    y_eval = np.max(ploty)

    a_l = left_fit[0]
    b_l = left_fit[1]
    a_r = right_fit[0]
    b_r = right_fit[1]

    # compute left and right curvature using formula from class material
    left_curverad = np.power(1 + (2*left_fit[0]*y_eval+left_fit[1])**2, 3/2)/abs(2*left_fit[0])  
    right_curverad = np.power(1 + (2*right_fit[0]*y_eval+right_fit[1])**2, 3/2)/abs(2*right_fit[0]) 

    print("left_lane_curvature", left_curverad, "right_lane_curvature", right_curverad)
    
    ########## Vehicle position ###########
    
    ## compute leftx = f_left(y) & rightx = f_right(y)
    plot_leftx = ( left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] ).astype(int)
    plot_rightx = ( right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] ).astype(int)
    
    vehicle_pos = ( plot_rightx[-1] +  plot_leftx[-1] )//2 # midpoint of the two lines base

    vehicle_offset = xm_per_pix*( vehicle_pos - width//2 )

    #print("Vehicle position -->", vehicle_pos)

    print("Vehicle offset -->", vehicle_offset)

    return left_curverad, right_curverad, vehicle_offset

test_img = test_warped_imgs[1]
print( compute_laneCurvature_vehiculeOffset( test_img ) )
left_lane_curvature 2399.839279967828 right_lane_curvature 3635.939618807656
Vehicle offset --> 0.1162857142857143
(2399.839279967828, 3635.939618807656, 0.1162857142857143)

Project images back into original perspective

In [25]:
# perspective transform inverse

# generate perspective transofrmation matrix given source and destination points
Minv = cv2.getPerspectiveTransform(dst, src)

warped_imgs = [] # inversed perspective images

for i in range (6):
    test_img = test_warped_imgs[i]
    lane_pixel_img = fit_polynomial_toLane(test_img)[2]

    # perform perspective transformation
    warped = cv2.warpPerspective(lane_pixel_img, Minv, img_size, flags=cv2.INTER_LINEAR)
    warped_imgs += [warped]
    
    PlotComparisonView(lane_pixel_img, warped, 'inverse perspective', colormap = (None, None))

    #plt.imshow(warped)
    #plt.show()

Draw sliding windows result on initial input images

In [26]:
combined_imgs = []

for i in range(5):
    
    warped = warped_imgs[i]

    img = mpimg.imread(test_images[i])

    combined_img = warped | img
    combined_imgs += [combined_img]

    PlotComparisonView(warped, combined_img, 'Final perspective', colormap = (None, None))
    #cv2.imwrite("output_images/finalPerspective"+str(i)+".jpg", cv2.cvtColor(combined_img, cv2.COLOR_RGB2BGR))

The Lane Finding full pipeline for still images:

In [27]:
def advancedLaneFindingPipeline(input_img):
    
    Pipeline = []
    
    # undistort input image
    undistort_img = cal_undistort(input_img, objpoints, imgpoints)
    Pipeline += [undistort_img]

    # create a binary from HLS color space
    hls_binary = to_HLS_Color_Channel(undistort_img)
    Pipeline += [hls_binary]
    
    # create a binary using axis, magnitude and direction gradient on hls_binary
    combined_threshold = combinedGradient_thresholdedBinary(hls_binary)
    Pipeline += [combined_threshold]
    
    # perform perspective transform to bird-eye view
    warped_img = cv2.warpPerspective(combined_threshold, M, img_size, flags=cv2.INTER_LINEAR)
    Pipeline += [warped_img]
    
    # detect lane pixel on bird-eye view image and generate fitting polynomial
    lane_pixel_img = fit_polynomial_toLane(warped_img)[2]
    Pipeline += [lane_pixel_img]
    current_fit = fit_polynomial_toLane(warped_img)[:2]
    print(current_fit)

    # perform inverse perspective transformation from bird-eye view
    unwarped_img = cv2.warpPerspective(lane_pixel_img, Minv, img_size, flags=cv2.INTER_LINEAR)
    Pipeline += [unwarped_img]
    
    # draw result on input_img
    finalresult_img = unwarped_img | undistort_img
    Pipeline += [finalresult_img]
    
    #PlotComparisonView(input_img, finalresult_img, 'Final perspective', colormap = (None, None))
    
    # save result into output_images folder
    #cv2.imwrite("output_images/finalPerspective"+str(i)+".jpg", cv2.cvtColor(combined_img, cv2.COLOR_RGB2BGR))
    
    
    # save result into output_images folder
    #for i in range( len(Pipeline)):
    #    if 1<= i <=3:
    #        cv2.imwrite("output_images/SingleImgs"+str(i)+".jpg", 255*Pipeline[i])
    #    else:
    #        cv2.imwrite("output_images/SingleImgs"+str(i)+".jpg", cv2.cvtColor(Pipeline[i], cv2.COLOR_RGB2BGR))

    
    return finalresult_img

Running full pipeline on test images

In [28]:
for i in range(6):    
    my_img = mpimg.imread(test_images[i])
    final = advancedLaneFindingPipeline(my_img)
(array([ 1.44483407e-04, -1.85988807e-01,  3.67378032e+02]), array([ 7.86749197e-05, -2.05128931e-01,  1.05672169e+03]))
(array([-2.07508988e-04,  2.89371359e-01,  2.37119575e+02]), array([-3.85815184e-05,  1.83403975e-01,  8.90526832e+02]))
(array([ 6.18629678e-05, -2.22803581e-01,  4.22955832e+02]), array([ 1.27099636e-04, -2.64287294e-01,  1.06585834e+03]))
(array([ 6.86260490e-05, -1.16409990e-01,  3.58811198e+02]), array([ 4.40502225e-04, -4.17506412e-01,  1.07814802e+03]))
(array([ 3.75879036e-04, -4.24751961e-01,  3.72873739e+02]), array([ 8.86968426e-05, -2.10078059e-01,  1.04409158e+03]))
(array([ 1.27110106e-04, -2.87349842e-01,  4.54707032e+02]), array([ 2.92852926e-04, -3.57884806e-01,  1.10379368e+03]))

Video Pipeline

First, Let's try this pipeline on the initial video stream as naive "independant" images

In [29]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
In [30]:
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    result = advancedLaneFindingPipeline(image)
    # you should return the final output (image where lines are drawn on lanes)

    return result
In [31]:
video_output = 'output_videos/project_video_notFinal.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
clip1 = VideoFileClip("project_video.mp4").subclip(0, 2)
print(type(clip1))
#clip1 = VideoFileClip("project_video.mp4")

white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(video_output, audio=False)
<class 'moviepy.video.io.VideoFileClip.VideoFileClip'>
(array([-1.88608172e-04,  2.71122790e-01,  2.38338530e+02]), array([-3.49962672e-05,  1.44431976e-01,  9.09043012e+02]))
Moviepy - Building video output_videos/project_video_notFinal.mp4.
Moviepy - Writing video output_videos/project_video_notFinal.mp4

t:   0%|                                                                              | 0/50 [00:00<?, ?it/s, now=None]
(array([-1.88608172e-04,  2.71122790e-01,  2.38338530e+02]), array([-3.49962672e-05,  1.44431976e-01,  9.09043012e+02]))
t:   4%|██▊                                                                   | 2/50 [00:01<00:38,  1.25it/s, now=None]
(array([-1.99051163e-04,  2.80889357e-01,  2.37117560e+02]), array([-9.04143749e-05,  1.87284630e-01,  8.99178790e+02]))
t:   6%|████▏                                                                 | 3/50 [00:03<00:47,  1.01s/it, now=None]
(array([-2.05452211e-04,  2.87983340e-01,  2.36291863e+02]), array([-6.03976561e-05,  1.63922885e-01,  9.00718405e+02]))
t:   8%|█████▌                                                                | 4/50 [00:04<00:56,  1.24s/it, now=None]
(array([-2.24178356e-04,  3.02383742e-01,  2.34636812e+02]), array([-1.70984514e-04,  2.68260741e-01,  8.77393635e+02]))
t:  10%|███████                                                               | 5/50 [00:06<00:59,  1.33s/it, now=None]
(array([-2.25486602e-04,  3.07825347e-01,  2.32341496e+02]), array([-2.36583876e-04,  3.34206373e-01,  8.62078089e+02]))
t:  12%|████████▍                                                             | 6/50 [00:07<01:01,  1.40s/it, now=None]
(array([-2.28686734e-04,  3.14355601e-01,  2.31047268e+02]), array([-2.05925836e-04,  3.17013410e-01,  8.60148008e+02]))
t:  14%|█████████▊                                                            | 7/50 [00:09<01:01,  1.44s/it, now=None]
(array([-2.24320968e-04,  3.18152856e-01,  2.27681153e+02]), array([-1.84863949e-04,  2.99282504e-01,  8.64421204e+02]))
t:  16%|███████████▏                                                          | 8/50 [00:11<01:01,  1.46s/it, now=None]
(array([-2.21260342e-04,  3.20262914e-01,  2.27166886e+02]), array([-2.26728369e-05,  2.21533781e-01,  8.64189956e+02]))
t:  18%|████████████▌                                                         | 9/50 [00:12<01:00,  1.48s/it, now=None]
(array([-2.22845158e-04,  3.26073477e-01,  2.22829324e+02]), array([4.51577102e-05, 1.69635230e-01, 8.74449077e+02]))
t:  20%|█████████████▊                                                       | 10/50 [00:14<01:01,  1.54s/it, now=None]
(array([-2.14644250e-04,  3.24296480e-01,  2.20248142e+02]), array([1.26409641e-04, 1.15003369e-01, 8.80013443e+02]))
t:  22%|███████████████▏                                                     | 11/50 [00:15<01:01,  1.58s/it, now=None]
(array([-2.16860182e-04,  3.28090626e-01,  2.18441439e+02]), array([1.15204064e-04, 1.16149961e-01, 8.76849396e+02]))
t:  24%|████████████████▌                                                    | 12/50 [00:17<00:59,  1.57s/it, now=None]
(array([-1.85831518e-04,  3.12369647e-01,  2.13566860e+02]), array([8.79710510e-05, 9.95434647e-02, 8.95602180e+02]))
t:  26%|█████████████████▉                                                   | 13/50 [00:18<00:57,  1.55s/it, now=None]
(array([-1.86703053e-04,  3.16191545e-01,  2.11669425e+02]), array([1.83354923e-05, 1.47868447e-01, 8.85684859e+02]))
t:  28%|███████████████████▎                                                 | 14/50 [00:20<00:55,  1.54s/it, now=None]
(array([-1.74434483e-04,  3.08476836e-01,  2.11734373e+02]), array([4.00265969e-05, 1.24933934e-01, 8.88449451e+02]))
t:  30%|████████████████████▋                                                | 15/50 [00:21<00:53,  1.53s/it, now=None]
(array([-1.61273325e-04,  2.97111544e-01,  2.15447917e+02]), array([8.17553959e-05, 9.71147798e-02, 8.83560755e+02]))
t:  32%|██████████████████████                                               | 16/50 [00:23<00:51,  1.52s/it, now=None]
(array([-1.45568641e-04,  2.80116776e-01,  2.19945102e+02]), array([-1.37238463e-04,  2.92059081e-01,  8.39873757e+02]))
t:  34%|███████████████████████▍                                             | 17/50 [00:24<00:50,  1.52s/it, now=None]
(array([-1.39714831e-04,  2.74997865e-01,  2.19617199e+02]), array([-1.77304550e-04,  3.32596373e-01,  8.28682239e+02]))
t:  36%|████████████████████████▊                                            | 18/50 [00:26<00:48,  1.51s/it, now=None]
(array([-1.25467773e-04,  2.63539545e-01,  2.21217811e+02]), array([-1.52872466e-04,  3.16931012e-01,  8.26860991e+02]))
t:  38%|██████████████████████████▏                                          | 19/50 [00:27<00:46,  1.51s/it, now=None]
(array([-1.17843626e-04,  2.60644489e-01,  2.18340919e+02]), array([-1.33204664e-04,  2.86347390e-01,  8.39214095e+02]))
t:  40%|███████████████████████████▌                                         | 20/50 [00:29<00:45,  1.50s/it, now=None]
(array([-1.11203766e-04,  2.57388043e-01,  2.16179293e+02]), array([-1.33627999e-04,  2.78887048e-01,  8.44046224e+02]))
t:  42%|████████████████████████████▉                                        | 21/50 [00:31<00:43,  1.51s/it, now=None]
(array([-1.01566319e-04,  2.51478937e-01,  2.14362049e+02]), array([-1.01597298e-04,  2.46903091e-01,  8.55648212e+02]))
t:  44%|██████████████████████████████▎                                      | 22/50 [00:32<00:42,  1.50s/it, now=None]
(array([-9.68867085e-05,  2.49794558e-01,  2.11064751e+02]), array([-4.95864828e-05,  2.03524336e-01,  8.68121641e+02]))
t:  46%|███████████████████████████████▋                                     | 23/50 [00:34<00:40,  1.51s/it, now=None]
(array([-9.39879893e-05,  2.45779065e-01,  2.09185792e+02]), array([3.90957313e-05, 1.37090323e-01, 8.81414698e+02]))
t:  48%|█████████████████████████████████                                    | 24/50 [00:35<00:39,  1.51s/it, now=None]
(array([-1.09197773e-04,  2.50657813e-01,  2.11074961e+02]), array([-1.51795323e-05,  1.69021032e-01,  8.74752649e+02]))
t:  50%|██████████████████████████████████▌                                  | 25/50 [00:37<00:37,  1.51s/it, now=None]
(array([-1.25791533e-04,  2.55405470e-01,  2.13097170e+02]), array([-2.36260816e-05,  1.70974054e-01,  8.73556372e+02]))
t:  52%|███████████████████████████████████▉                                 | 26/50 [00:38<00:36,  1.51s/it, now=None]
(array([-1.23729634e-04,  2.54294738e-01,  2.10420894e+02]), array([3.78967901e-05, 1.08801122e-01, 8.92042117e+02]))
t:  54%|█████████████████████████████████████▎                               | 27/50 [00:40<00:34,  1.51s/it, now=None]
(array([-1.38517583e-04,  2.61334903e-01,  2.10111517e+02]), array([-6.09512025e-05,  1.73140801e-01,  8.87148146e+02]))
t:  56%|██████████████████████████████████████▋                              | 28/50 [00:41<00:33,  1.51s/it, now=None]
(array([-1.31692781e-04,  2.58444401e-01,  2.07431357e+02]), array([-4.92435029e-05,  1.43832954e-01,  9.03315735e+02]))
t:  58%|████████████████████████████████████████                             | 29/50 [00:43<00:31,  1.51s/it, now=None]
(array([-1.33078184e-04,  2.53259258e-01,  2.11848418e+02]), array([-3.80487312e-05,  1.37818189e-01,  9.00862455e+02]))
t:  60%|█████████████████████████████████████████▍                           | 30/50 [00:44<00:30,  1.53s/it, now=None]
(array([-1.75246281e-04,  2.78624201e-01,  2.11991695e+02]), array([-1.10075615e-04,  2.09942426e-01,  8.81592880e+02]))
t:  62%|██████████████████████████████████████████▊                          | 31/50 [00:46<00:29,  1.53s/it, now=None]
(array([-1.87718756e-04,  2.83521127e-01,  2.15421865e+02]), array([-2.40235685e-04,  3.47286006e-01,  8.47431121e+02]))
t:  64%|████████████████████████████████████████████▏                        | 32/50 [00:47<00:27,  1.52s/it, now=None]
(array([-2.05214289e-04,  2.91077291e-01,  2.18840780e+02]), array([-1.90389299e-04,  3.15071635e-01,  8.45205483e+02]))
t:  66%|█████████████████████████████████████████████▌                       | 33/50 [00:49<00:25,  1.51s/it, now=None]
(array([-2.06274097e-04,  2.94683695e-01,  2.16259414e+02]), array([-1.57389030e-04,  2.87707068e-01,  8.54895874e+02]))
t:  68%|██████████████████████████████████████████████▉                      | 34/50 [00:50<00:24,  1.51s/it, now=None]
(array([-2.19059227e-04,  3.04726037e-01,  2.16623063e+02]), array([-2.46477012e-05,  2.24669297e-01,  8.54718341e+02]))
t:  70%|████████████████████████████████████████████████▎                    | 35/50 [00:52<00:22,  1.52s/it, now=None]
(array([-2.29345411e-04,  3.12753297e-01,  2.16797856e+02]), array([3.57558806e-05, 1.82726912e-01, 8.59359053e+02]))
t:  72%|█████████████████████████████████████████████████▋                   | 36/50 [00:53<00:21,  1.52s/it, now=None]
(array([-2.37130387e-04,  3.21413486e-01,  2.16902189e+02]), array([1.26272373e-04, 1.24281676e-01, 8.61656535e+02]))
t:  74%|███████████████████████████████████████████████████                  | 37/50 [00:55<00:20,  1.56s/it, now=None]
(array([-2.42256920e-04,  3.27396733e-01,  2.16129584e+02]), array([-2.09751974e-05,  2.04586703e-01,  8.54453419e+02]))
t:  76%|████████████████████████████████████████████████████▍                | 38/50 [00:56<00:18,  1.54s/it, now=None]
(array([-2.60377207e-04,  3.43656367e-01,  2.14533018e+02]), array([1.21370858e-04, 1.01541925e-01, 8.65918523e+02]))
t:  78%|█████████████████████████████████████████████████████▊               | 39/50 [00:58<00:16,  1.53s/it, now=None]
(array([-2.68841011e-04,  3.51110544e-01,  2.14180664e+02]), array([-2.47645724e-05,  2.05911716e-01,  8.47914147e+02]))
t:  80%|███████████████████████████████████████████████████████▏             | 40/50 [00:59<00:15,  1.54s/it, now=None]
(array([-2.69095290e-04,  3.57541519e-01,  2.10666607e+02]), array([-2.34858770e-05,  1.91912032e-01,  8.54682381e+02]))
t:  82%|████████████████████████████████████████████████████████▌            | 41/50 [01:01<00:13,  1.54s/it, now=None]
(array([-2.68666989e-04,  3.60301384e-01,  2.09447408e+02]), array([-1.29169212e-04,  2.78110383e-01,  8.40132496e+02]))
t:  84%|█████████████████████████████████████████████████████████▉           | 42/50 [01:02<00:12,  1.53s/it, now=None]
(array([-2.78150606e-04,  3.71206122e-01,  2.06587690e+02]), array([-2.43581027e-04,  3.85213570e-01,  8.18853425e+02]))
t:  86%|███████████████████████████████████████████████████████████▎         | 43/50 [01:04<00:10,  1.52s/it, now=None]
(array([-2.76808555e-04,  3.76078229e-01,  2.03755811e+02]), array([-2.32899922e-04,  3.70356894e-01,  8.25751759e+02]))
t:  88%|████████████████████████████████████████████████████████████▋        | 44/50 [01:06<00:09,  1.53s/it, now=None]
(array([-2.61870426e-04,  3.69949458e-01,  2.01788503e+02]), array([-2.14170155e-04,  3.46150692e-01,  8.35256395e+02]))
t:  90%|██████████████████████████████████████████████████████████████       | 45/50 [01:07<00:07,  1.53s/it, now=None]
(array([-2.64917306e-04,  3.77792963e-01,  1.98036945e+02]), array([-1.79941650e-04,  3.12549887e-01,  8.45430327e+02]))
t:  92%|███████████████████████████████████████████████████████████████▍     | 46/50 [01:09<00:06,  1.52s/it, now=None]
(array([-2.54924414e-04,  3.72338012e-01,  1.97330855e+02]), array([-1.29666839e-04,  2.94002189e-01,  8.44410292e+02]))
t:  94%|████████████████████████████████████████████████████████████████▊    | 47/50 [01:10<00:04,  1.51s/it, now=None]
(array([-2.45300937e-04,  3.69821625e-01,  1.94356495e+02]), array([-6.92531662e-05,  2.51704974e-01,  8.51541473e+02]))
t:  96%|██████████████████████████████████████████████████████████████████▏  | 48/50 [01:12<00:03,  1.51s/it, now=None]
(array([-2.49255831e-04,  3.76072730e-01,  1.92569245e+02]), array([1.34062890e-06, 2.14119934e-01, 8.50694815e+02]))
t:  98%|███████████████████████████████████████████████████████████████████▌ | 49/50 [01:13<00:01,  1.51s/it, now=None]
(array([-2.26696327e-04,  3.65015075e-01,  1.90044155e+02]), array([4.22212735e-05, 1.64417179e-01, 8.64908077e+02]))
t: 100%|█████████████████████████████████████████████████████████████████████| 50/50 [01:15<00:00,  1.51s/it, now=None]
(array([-2.15430030e-04,  3.58640058e-01,  1.89834349e+02]), array([7.06690242e-05, 1.41276057e-01, 8.66620982e+02]))
                                                                                                                       
Moviepy - Done !
Moviepy - video ready output_videos/project_video_notFinal.mp4
Wall time: 1min 18s
In [32]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(video_output))
Out[32]:

Let's make some improvements to this naive appraoch

The main two points that will be adressed are:

  • Avoiding blind searching in every new frame: searching for the new lines around the area where the previous ones were found.
  • Smoothing the detected lines across the frames using a moving average for the last n detected good polynomial fits. In order to adress these points we will need to keep track of some characteristics of each line acroos the different frames. For this reason a Class Line() is defined with the attributes we want to keep track of along the video.
In [33]:
# Define a class to receive the characteristics of each line detection
class Line():
    def __init__(self):
        # was the line detected in the last iteration?
        self.detected = False  
        
        # x values of the last n fits of the line
        self.recent_xfitted = []
        
        #average x values of the fitted line over the last n iterations
        self.bestx = None     
        
        #polynomial coefficients averaged over the last n iterations
        self.best_fit = None  
        
        #polynomial coefficients for the most recent fit
        self.current_fit = [np.array([False])]
        
        #polynomial coefficients for the most recent fits
        self.recent_fits = [np.array([False])]  
        
        #radius of curvature of the line in some units
        self.radius_of_curvature = None 
        
        #radius of curvature Array of the line in some units
        self.radius_of_curvature_array = [np.array([])] 
        
        #distance in meters of vehicle center from the line
        self.line_base_pos = None 

        # tracking number of missed lines in a row
        self.missCounter = 4 # initialized to 4 to be able to use sliding windows in the first run

        
# Create two instances from this class
left_line = Line()
right_line = Line()
  • Here this function is similar to the one with still images, it does add however an additionnal arguments sliding_windows that decides whether we want to perform a sliding windows search all over again in the full frame, or just search locally around the previous detections.
In [34]:
def detectLanePixels_vid(test_im, sliding_windows, stats=True):
    """
    This function returns the positions of activated pixels in left and right lanes
    and the output image with those pixels painted
    """
    
    # HYPERPARAMETERS
    # Choose the number of sliding windows
    nwindows = 9
    # Set the width of the windows +/- margin
    margin = 100
    # Set minimum number of pixels found to recenter window
    minpix = 50
    
    # Make sure image values are in [0, 255] interval
    if np.max(test_im) == 1:
        test_im = np.uint8(255*test_im)
    
    # Create a histogram for the second half of the image
    histogram = hist(test_im)

    # Create output image to draw on and visualize result
    out_img = np.dstack((test_im, test_im, test_im))

    # peak of left and right part of the histogram are starting points of left and right lanes
    midpoint = np.int(histogram.shape[0]//2)
    leftxbase = np.argmax(histogram[:midpoint])
    rightxbase = np.argmax(histogram[midpoint:]) + midpoint

    # Set height of windows - based on nwindows above and image shape
    window_height = np.int(test_im.shape[0]//nwindows)

    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = test_im.nonzero() # nonzero is a 2-elements tuple
    nonzeroy = np.array(nonzero[0]) # first element is an array of y coordinates of non-zero pixels
    nonzerox = np.array(nonzero[1]) # second element is an array of x coordinates of non-zero pixels

    # Current positions to be updated later for each window in nwindows
    leftx_current = leftxbase
    rightx_current = rightxbase
    
    
    if sliding_windows == True: # use sliding windows approach
        print("--> Using Sliding Windows Approach <--")
        # Create empty lists to receive left and right lane pixel indices
        left_lane_inds = []
        right_lane_inds = []

        # This list is only for a general stat purpose
        activatedPixels_perWindow = []

        # Step through the windows one by one
        for window in range(nwindows):
            # Identify window boundaries in x and y (and right and left)
            win_y_low = test_im.shape[0] - (window+1)*window_height
            win_y_high = test_im.shape[0] - window*window_height

            #Find the four boundaries of the left & right windows on x axis 
            win_xleft_low = leftx_current - margin  
            win_xleft_high = leftx_current + margin 
            win_xright_low = rightx_current - margin  
            win_xright_high = rightx_current + margin

            #Identify the nonzero pixels in x and y within the window
            left_nonzero_pixels = ( (nonzerox > win_xleft_low) & (nonzerox < win_xleft_high)
                                   & (nonzeroy > win_y_low) & (nonzeroy < win_y_high) ).nonzero()[0]

            right_nonzero_pixels = ( (nonzerox > win_xright_low) & (nonzerox < win_xright_high)
                                   & (nonzeroy > win_y_low) & (nonzeroy < win_y_high) ).nonzero()[0]

            #print(len(left_nonzero_pixels), len(right_nonzero_pixels)) # How many non zero pixels indices are on 
                                                                       # the left & right line in current window
            activatedPixels_perWindow.insert(0,[len(left_nonzero_pixels), len(right_nonzero_pixels)])    

            # add the result to the one from previous windows
            left_lane_inds.append(left_nonzero_pixels) 
            right_lane_inds.append(right_nonzero_pixels) 

            # Draw the windows on the visualization image
            cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 5) 
            cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 5)

            # recalculating the base x-coordinates of left and right windows
            if (len(left_nonzero_pixels) > minpix):
                # Position of activated pixels in current left window
                leftx_pixelpos_current = nonzerox[left_nonzero_pixels]
                leftx_current = np.int(np.mean( leftx_pixelpos_current, axis=0) )
                #print("new left base is ==>", leftx_current)

            if ( len( right_nonzero_pixels ) > minpix):
                # Position of activated pixels in current right window
                rightx_pixelpos_current = nonzerox[right_nonzero_pixels]
                rightx_current = np.int(np.mean( rightx_pixelpos_current, axis=0) )
                #print("new right base is ==>", rightx_current)

        # reorganise indices across all windows
        left_lane_inds = np.concatenate( left_lane_inds )
        right_lane_inds = np.concatenate( right_lane_inds )
        
    
    elif sliding_windows == False: # Use info from previous frames
        print("--> Searching Around previous PolyFits Approach <--")

        margin = 100
        
        # Get left_fit and right_fit of last frame lines instances
        last_left_fit = left_line.best_fit
        last_right_fit = right_line.best_fit

        # Create  lists to receive left and right lane pixel indices by exploring the region next to where
        # last lanes polyfit were found
        left_lane_inds = ( (nonzerox > (last_left_fit[0]*(nonzeroy**2) + last_left_fit[1]*nonzeroy + 
                        last_left_fit[2] - margin) ) & (nonzerox < (last_left_fit[0]*(nonzeroy**2) + 
                        last_left_fit[1]*nonzeroy + last_left_fit[2] + margin)))

        right_lane_inds = ((nonzerox > (last_right_fit[0]*(nonzeroy**2) + last_right_fit[1]*nonzeroy + 
                        last_right_fit[2] - margin)) & (nonzerox < (last_right_fit[0]*(nonzeroy**2) + 
                        last_right_fit[1]*nonzeroy + last_right_fit[2] + margin)))


    

    # get pixel position from pixel indices
    leftx_pos = nonzerox[left_lane_inds]
    lefty_pos = nonzeroy[left_lane_inds]
    rightx_pos = nonzerox[right_lane_inds]
    righty_pos = nonzeroy[right_lane_inds]

    out_img[ lefty_pos, leftx_pos] = [255, 0, 0]
    out_img[ righty_pos, rightx_pos] = [0, 0, 255]
    
    if stats == True:
        print("leftxbase : ", leftxbase, 'rightxbase : ', rightxbase)
        print("Number of non zero pixels are in the whole image", nonzero[0].shape) #how many non zero pixels are in the whole image
        print("Number of non zero pixels indices on the left lane", left_lane_inds.shape) # how many non zero pixels indices are on the left line
        print("Number of non zero pixels indices on the right lane", right_lane_inds.shape) # how many non zero pixels indices are on the right line

    return leftx_pos, lefty_pos, rightx_pos, righty_pos, out_img
In [35]:
def fit_polynomial_toLane_vid(image, last_fit, xm_ym_per_pix = (3.7/700, 30/720), plot_poly=True):
    
    # fit a polynomial into the detected pixels so far
    test_img = test_warped_imgs[5]
    leftx, lefty, rightx, righty, lanePixel_im = detectLanePixels_vid(image, last_fit, stats=False)

    # fitting into a 2nd order polynomial x = f(y)
    #leftfit = np.polyfit( lefty*xm_ym_per_pix[1], leftx*xm_ym_per_pix[0], 2 )
    #rightfit = np.polyfit( righty*xm_ym_per_pix[1], rightx*xm_ym_per_pix[0], 2)

    leftfit = np.polyfit( lefty, leftx, 2 )
    rightfit = np.polyfit( righty, rightx, 2)
    #print("leftfit", leftfit, "rightfit", rightfit)
    # Use the fitting parameters to plot the polynomial on the image
    ## generate y axis base
    width = image.shape[1]
    height = image.shape[0]
    
    ploty = np.linspace(0, height-1, height).astype(int)
    #ploty = ploty*xm_ym_per_pix[1]
    
    ## compute leftx = f_left(y) & rightx = f_right(y)
    plot_leftx = ( leftfit[0]*ploty**2 + leftfit[1]*ploty + leftfit[2] ).astype(int)
    plot_rightx = ( rightfit[0]*ploty**2 + rightfit[1]*ploty + rightfit[2] ).astype(int)
    
    # plot left and right fitting on lanePixel_im
    #if plot_poly == True:
    #plt.plot( plot_leftx, ploty, color='yellow' )
    #plt.plot( plot_rightx, ploty, color='yellow')
    
    #print(ploty)
    #print(plot_rightx)
    for paint in range(-5,5):
        try:
            lanePixel_im[ ploty, plot_leftx+paint] = [255, 0, 0] 
            lanePixel_im[ ploty, plot_rightx+paint] = [255, 0, 0]
        except:
            break
    
    ######### Curvature radius ##########  
    
    ## fit poly in real world space
    y_eval = np.max(ploty)
    xm_per_pix = xm_ym_per_pix[0]
    ym_per_pix = xm_ym_per_pix[1]
    
    leftfit_rw = np.polyfit( lefty*ym_per_pix, leftx*xm_per_pix, 2 )
    rightfit_rw = np.polyfit( righty*ym_per_pix, rightx*xm_per_pix, 2) 
    
    # Calculation of R_curve (radius of curvature)
    left_curverad = ((1 + (2*leftfit_rw[0]*y_eval*ym_per_pix + leftfit_rw[1])**2)**1.5) / np.absolute(2*leftfit_rw[0])
    right_curverad = ((1 + (2*rightfit_rw[0]*y_eval*ym_per_pix + rightfit_rw[1])**2)**1.5) / np.absolute(2*rightfit_rw[0])
    
    left_line.radius_of_curvature_array.append(left_curverad)
    right_line.radius_of_curvature_array.append(right_curverad)
    
    print("Curvature radius Real World - Left =>", left_curverad, "m - Right", right_curverad, "m")
    
    ########## Vehicle position ###########
   
    vehicle_pos = ( plot_rightx[-1] +  plot_leftx[-1] )//2 # midpoint of the two lines base
    
    vehicle_offset = xm_per_pix*( vehicle_pos - width//2 )
    
    #print("Vehicle position -->", vehicle_pos)
    
    print("Vehicle offset -->", vehicle_offset)

    ########## Sanity Check ############
    
    sanity = True

    ## Inconsistent distance between the two lanes across the heigth of the road
    LinesDiff = plot_rightx - plot_leftx
    NbPointsPerSegment = 30
    
    top = sum( LinesDiff[ :NbPointsPerSegment ] )
    middle = sum( LinesDiff[ height//2:height//2 + NbPointsPerSegment] )
    bottom = sum( LinesDiff[ -NbPointsPerSegment:] )
    
    print("Top --->", top, "Middle --->", middle, "Bottom --->", bottom )
    
    curvrad_diff = 2000   
    
    if abs( top - middle ) > curvrad_diff or abs( top - bottom ) > curvrad_diff or abs( middle - bottom ) > curvrad_diff:
        
        print( "---> Distance between lines is not consistent <---")
        leftfit = left_line.best_fit
        rightfit = right_line.best_fit
        left_line.detected = False
        right_line.detected = False
        left_line.missCounter += 1
        right_line.missCounter += 1
        sanity = False 
        
    else:
        
        # Sanity Check successful - current lines to be included
        left_line.detected = True
        right_line.detected = True
        left_line.radius_of_curvature = int( left_curverad )
        right_line.radius_of_curvature = int( right_curverad )
        left_line.line_base_pos = vehicle_offset
        
        left_line.recent_fits.append( leftfit )
        right_line.recent_fits.append( rightfit )
        
        ## The approach of computing Average is subject to improvement
        
        n = 3 # number of frames across which we want to compute the average
        if len(left_line.recent_fits) > n and left_line.missCounter <= 3:
            
            left_line.best_fit = np.average(left_line.recent_fits[-n:], axis=0)
            right_line.best_fit = np.average(right_line.recent_fits[-n:], axis=0)
        else:
            
            # don't use the average
            left_line.best_fit = leftfit
            right_line.best_fit = rightfit
            
        sanity = True
        
    return left_line.best_fit, right_line.best_fit, lanePixel_im, sanity
In [36]:
def putTextToFrame(image, curvature, offset):
    #### Write curvature radius and vehicule offset info to final image
    
    # font 
    font = cv2.FONT_HERSHEY_SIMPLEX 
    # fontScale 
    fontScale = 1.5
    # Blue color in BGR 
    color = (230, 0, 0) 
    # Line thickness of 2 px 
    thickness = 2
    # Using cv2.putText() method 
    msg1 = 'Curvature Radius : ' + str(curvature)+' m'
    msg2 = 'Vehicule Offset : ' + str( round(offset, 2) ) + 'm'
    
    image = cv2.putText(image, msg1, (300, 100) , font, fontScale, color, thickness, cv2.LINE_AA)
    image = cv2.putText(image, msg2, (300, 200), font, fontScale, color, thickness, cv2.LINE_AA)
    
    return image
In [37]:
def advancedLaneFindingPipeline_vid(input_img):
    
    Pipeline = []
    
    # undistort input image
    undistort_img = cal_undistort(input_img, objpoints, imgpoints)
    Pipeline += [undistort_img]

    # create a binary from HLS color space
    hls_binary = to_HLS_Color_Channel(undistort_img)
    Pipeline += [hls_binary]
    
    # create a binary using axis, magnitude and direction gradient on hls_binary
    combined_threshold = combinedGradient_thresholdedBinary(hls_binary)
    Pipeline += [combined_threshold]
    
    # perform perspective transform to bird-eye view
    warped_img = cv2.warpPerspective(combined_threshold, M, img_size, flags=cv2.INTER_LINEAR)
    Pipeline += [warped_img]
    
    # Detect lane pixel on bird-eye view image and generate fitting polynomial
    
    ########### Decide between Sliding Windows or Local search ############
    
    missThreshold = 8
    # Reperform a sliding windows search if accumulated a number of misses above threshold
    if left_line.missCounter + right_line.missCounter >= missThreshold:
        
        sliding_windows = True
        left_line.missCounter = 0
        right_line.missCounter = 0
    
    else:
        
        sliding_windows = False
        
    current_left_fit, current_right_fit, lane_pixel_img, sanity = fit_polynomial_toLane_vid(warped_img, sliding_windows, plot_poly=False)
    
    Pipeline += [lane_pixel_img]
    
    ############ Drawing Fill Poly Region into input frame ################
    
    ## Used code from course material - note: might want to add it as a separate helper function for future use
    
    # Create an image to draw the lines on
    warp_zero = np.zeros_like(warped_img).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
    
    # Recast the x and y points into usable format for cv2.fillPoly()
    # These can be added as attributes hence directly accessible ?   
    ploty = np.linspace(0, image.shape[0]-1, image.shape[0])
    left_fitx = ( current_left_fit[0]*ploty**2 + current_left_fit[1]*ploty + current_left_fit[2] ).astype(int)
    right_fitx = ( current_right_fit[0]*ploty**2 + current_right_fit[1]*ploty + current_right_fit[2] ).astype(int)
    
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))

    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
        
    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0])) 
    
    # Combine the result with the original image
    finalresult_img = cv2.addWeighted(input_img, 1, newwarp, 0.3, 0)
    
    finalresult_img = putTextToFrame(finalresult_img, left_line.radius_of_curvature, left_line.line_base_pos )
    
    Pipeline += [finalresult_img]
    


    #PlotComparisonView(lane_pixel_img, finalresult_img, 'Final perspective', colormap = (None, None))
    
    ############ Save Pipeline stages results (just for the writeup ) ################

    # save result into output_images folder
    #for i in range( len(Pipeline)):        
    #    if 1<= i <=3:
    #        cv2.imwrite("output_images/finalResultImgs"+str(i+61)+".jpg", 255*Pipeline[i])          
    #    else:
    #        cv2.imwrite("output_images/finalResultImgs"+str(i+61)+".jpg", cv2.cvtColor(Pipeline[i], cv2.COLOR_RGB2BGR))
            
    return finalresult_img
In [38]:
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    result = advancedLaneFindingPipeline_vid(image)
    # you should return the final output (image where lines are drawn on lanes)

    return result
In [39]:
video_output = 'output_videos/project_video_final2.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
clip1 = VideoFileClip("project_video.mp4").subclip(0, 2)
#clip1 = VideoFileClip("project_video.mp4")

left_line = Line()
right_line = Line()

white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(video_output, audio=False)
--> Using Sliding Windows Approach <--
Curvature radius Real World - Left => 870.7296470594531 m - Right 4693.69592320204 m
Vehicle offset --> 0.12685714285714286
Top ---> 20069 Middle ---> 19342 Bottom ---> 19738
Moviepy - Building video output_videos/project_video_final2.mp4.
Moviepy - Writing video output_videos/project_video_final2.mp4

t:   0%|                                                                              | 0/50 [00:00<?, ?it/s, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 869.5606419546223 m - Right 4927.818359817006 m
Vehicle offset --> 0.12685714285714286
Top ---> 20069 Middle ---> 19341 Bottom ---> 19742
t:   4%|██▊                                                                   | 2/50 [00:01<00:43,  1.11it/s, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 825.2191172619653 m - Right 1869.0707404330333 m
Vehicle offset --> 0.111
Top ---> 19831 Middle ---> 19266 Bottom ---> 19496
t:   6%|████▏                                                                 | 3/50 [00:03<00:52,  1.11s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 799.3279408594502 m - Right 2682.4919768384575 m
Vehicle offset --> 0.1162857142857143
Top ---> 19876 Middle ---> 19152 Bottom ---> 19463
t:   8%|█████▌                                                                | 4/50 [00:04<00:57,  1.25s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 732.839528899932 m - Right 866.0716167228234 m
Vehicle offset --> 0.09514285714285714
Top ---> 19175 Middle ---> 19137 Bottom ---> 19333
t:  10%|███████                                                               | 5/50 [00:06<01:00,  1.33s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 728.3847915850165 m - Right 695.0328646904244 m
Vehicle offset --> 0.09514285714285714
Top ---> 18906 Middle ---> 19142 Bottom ---> 19290
t:  12%|████████▍                                                             | 6/50 [00:08<01:01,  1.40s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 718.619639799039 m - Right 801.0685267858809 m
Vehicle offset --> 0.10042857142857144
Top ---> 18877 Middle ---> 18997 Bottom ---> 19268
t:  14%|█████████▊                                                            | 7/50 [00:09<01:01,  1.44s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 734.4822759803052 m - Right 889.6516850490441 m
Vehicle offset --> 0.1162857142857143
Top ---> 19093 Middle ---> 19056 Bottom ---> 19291
t:  16%|███████████▏                                                          | 8/50 [00:11<01:01,  1.47s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 742.0912778879703 m - Right 8118.694971337214 m
Vehicle offset --> 0.1955714285714286
Top ---> 19071 Middle ---> 18837 Bottom ---> 19995
t:  18%|████████████▌                                                         | 9/50 [00:12<01:00,  1.49s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 737.9413933987857 m - Right 3592.4892890871356 m
Vehicle offset --> 0.21142857142857144
Top ---> 19484 Middle ---> 18919 Bottom ---> 20232
t:  20%|█████████████▊                                                       | 10/50 [00:14<01:00,  1.50s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 764.6902874318018 m - Right 1299.282809164946 m
Vehicle offset --> 0.23785714285714288
Top ---> 19704 Middle ---> 18876 Bottom ---> 20448
t:  22%|███████████████▏                                                     | 11/50 [00:15<00:59,  1.52s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 756.5278957389454 m - Right 1414.279762236115 m
Vehicle offset --> 0.21671428571428572
Top ---> 19666 Middle ---> 18769 Bottom ---> 20224
t:  24%|████████████████▌                                                    | 12/50 [00:17<00:58,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 883.5667395529613 m - Right 1887.287609661566 m
Vehicle offset --> 0.1955714285714286
Top ---> 20371 Middle ---> 19226 Bottom ---> 20038
t:  26%|█████████████████▉                                                   | 13/50 [00:18<00:56,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 879.6454749742826 m - Right 8924.427020616244 m
Vehicle offset --> 0.16385714285714287
Top ---> 20149 Middle ---> 19192 Bottom ---> 19715
t:  28%|███████████████████▎                                                 | 14/50 [00:20<00:55,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 941.3443801123989 m - Right 4079.04323085875 m
Vehicle offset --> 0.15857142857142859
Top ---> 20223 Middle ---> 19142 Bottom ---> 19616
t:  30%|████████████████████▋                                                | 15/50 [00:21<00:53,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1018.8596402190869 m - Right 2041.7589951566008 m
Vehicle offset --> 0.15857142857142859
Top ---> 19953 Middle ---> 18820 Bottom ---> 19434
t:  32%|██████████████████████                                               | 16/50 [00:23<00:52,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1128.8794779219525 m - Right 1062.871925508915 m
Vehicle offset --> 0.1162857142857143
Top ---> 18520 Middle ---> 18778 Bottom ---> 18955
t:  34%|███████████████████████▍                                             | 17/50 [00:24<00:50,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1177.1192372523678 m - Right 919.6614426682476 m
Vehicle offset --> 0.10571428571428572
Top ---> 18290 Middle ---> 18762 Bottom ---> 18928
t:  36%|████████████████████████▊                                            | 18/50 [00:26<00:49,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1309.3011473068334 m - Right 1052.1312416786768 m
Vehicle offset --> 0.10571428571428572
Top ---> 18175 Middle ---> 18656 Bottom ---> 18888
t:  38%|██████████████████████████▏                                          | 19/50 [00:28<00:47,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1394.1654203872586 m - Right 1139.1795736190932 m
Vehicle offset --> 0.10042857142857144
Top ---> 18582 Middle ---> 18856 Bottom ---> 18926
t:  40%|███████████████████████████▌                                         | 20/50 [00:29<00:46,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1480.5513038366512 m - Right 1240.9718239321408 m
Vehicle offset --> 0.10042857142857144
Top ---> 18845 Middle ---> 18983 Bottom ---> 18962
t:  42%|████████████████████████████▉                                        | 21/50 [00:31<00:44,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1618.972954313321 m - Right 1645.3774159401935 m
Vehicle offset --> 0.111
Top ---> 19237 Middle ---> 19187 Bottom ---> 19150
t:  44%|██████████████████████████████▎                                      | 22/50 [00:32<00:43,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1696.9549600274834 m - Right 3400.03327914195 m
Vehicle offset --> 0.12685714285714286
Top ---> 19694 Middle ---> 19389 Bottom ---> 19441
t:  46%|███████████████████████████████▋                                     | 23/50 [00:34<00:41,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1744.4800430532161 m - Right 4622.8798112262875 m
Vehicle offset --> 0.14800000000000002
Top ---> 20119 Middle ---> 19508 Bottom ---> 19835
t:  48%|█████████████████████████████████                                    | 24/50 [00:35<00:40,  1.55s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1502.7563357396014 m - Right 11726.668265255656 m
Vehicle offset --> 0.111
Top ---> 19873 Middle ---> 19387 Bottom ---> 19590
t:  50%|██████████████████████████████████▌                                  | 25/50 [00:37<00:38,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1307.3513252571033 m - Right 7577.453910970568 m
Vehicle offset --> 0.08985714285714286
Top ---> 19782 Middle ---> 19293 Bottom ---> 19555
t:  52%|███████████████████████████████████▉                                 | 26/50 [00:38<00:37,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1328.9959239408186 m - Right 4353.096517976267 m
Vehicle offset --> 0.10042857142857144
Top ---> 20388 Middle ---> 19494 Bottom ---> 19778
t:  54%|█████████████████████████████████████▎                               | 27/50 [00:40<00:35,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1184.369326682696 m - Right 2688.2730720664017 m
Vehicle offset --> 0.06871428571428571
Top ---> 20272 Middle ---> 19646 Bottom ---> 19602
t:  56%|██████████████████████████████████████▋                              | 28/50 [00:41<00:33,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1245.3903753176367 m - Right 3380.360389140688 m
Vehicle offset --> 0.06871428571428571
Top ---> 20833 Middle ---> 19935 Bottom ---> 19682
t:  58%|████████████████████████████████████████                             | 29/50 [00:43<00:32,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 1231.899524882505 m - Right 4309.628872345227 m
Vehicle offset --> 0.06342857142857143
Top ---> 20619 Middle ---> 19772 Bottom ---> 19648
t:  60%|█████████████████████████████████████████▍                           | 30/50 [00:45<00:30,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 935.5410756376227 m - Right 1505.7495762047677 m
Vehicle offset --> 0.04228571428571429
Top ---> 20065 Middle ---> 19591 Bottom ---> 19610
t:  62%|██████████████████████████████████████████▊                          | 31/50 [00:46<00:29,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 874.267777805315 m - Right 684.2111910839171 m
Vehicle offset --> 0.037000000000000005
Top ---> 18988 Middle ---> 19456 Bottom ---> 19516
t:  64%|████████████████████████████████████████████▏                        | 32/50 [00:48<00:27,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 800.5359789151432 m - Right 809.1495138648613 m
Vehicle offset --> 0.037000000000000005
Top ---> 18733 Middle ---> 19129 Bottom ---> 19493
t:  66%|█████████████████████████████████████████████▌                       | 33/50 [00:49<00:26,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 796.003560485462 m - Right 1057.3167157720616 m
Vehicle offset --> 0.05285714285714286
Top ---> 19155 Middle ---> 19284 Bottom ---> 19752
t:  68%|██████████████████████████████████████████████▉                      | 34/50 [00:51<00:24,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 749.1527031510454 m - Right 7194.423699262847 m
Vehicle offset --> 0.1162857142857143
Top ---> 19113 Middle ---> 19061 Bottom ---> 20356
t:  70%|████████████████████████████████████████████████▎                    | 35/50 [00:52<00:23,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 715.9218967250199 m - Right 4829.710204199772 m
Vehicle offset --> 0.13214285714285715
Top ---> 19221 Middle ---> 18935 Bottom ---> 20463
t:  72%|█████████████████████████████████████████████████▋                   | 36/50 [00:54<00:21,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 692.0196873238029 m - Right 1291.2146622990851 m
Vehicle offset --> 0.15857142857142859
Top ---> 19262 Middle ---> 18656 Bottom ---> 20596
t:  74%|███████████████████████████████████████████████████                  | 37/50 [00:55<00:20,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 677.8059439290687 m - Right 7972.883602544621 m
Vehicle offset --> 0.09514285714285714
Top ---> 19100 Middle ---> 18703 Bottom ---> 19852
t:  76%|████████████████████████████████████████████████████▍                | 38/50 [00:57<00:18,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 630.4410696914505 m - Right 1349.9904115363054 m
Vehicle offset --> 0.12685714285714286
Top ---> 19441 Middle ---> 18428 Bottom ---> 20107
t:  78%|█████████████████████████████████████████████████████▊               | 39/50 [00:58<00:16,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 610.603973313754 m - Right 7557.4530797736525 m
Vehicle offset --> 0.07928571428571429
Top ---> 18961 Middle ---> 18407 Bottom ---> 19584
t:  80%|███████████████████████████████████████████████████████▏             | 40/50 [01:00<00:15,  1.53s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 610.0809337949574 m - Right 6458.772110394526 m
Vehicle offset --> 0.07400000000000001
Top ---> 19241 Middle ---> 18495 Bottom ---> 19474
t:  82%|████████████████████████████████████████████████████████▌            | 41/50 [01:01<00:13,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 611.0296365441657 m - Right 723.001942924514 m
Vehicle offset --> 0.04228571428571429
Top ---> 18431 Middle ---> 18658 Bottom ---> 19149
t:  84%|█████████████████████████████████████████████████████████▉           | 42/50 [01:03<00:12,  1.55s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 590.4816273390518 m - Right 617.4791530576167 m
Vehicle offset --> 0.04757142857142857
Top ---> 18256 Middle ---> 18685 Bottom ---> 19160
t:  86%|███████████████████████████████████████████████████████████▎         | 43/50 [01:05<00:10,  1.54s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 593.1414595859297 m - Right 667.6907593545466 m
Vehicle offset --> 0.05814285714285715
Top ---> 18581 Middle ---> 18787 Bottom ---> 19180
t:  88%|████████████████████████████████████████████████████████████▋        | 44/50 [01:06<00:09,  1.57s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 627.3085965910724 m - Right 725.588712506332 m
Vehicle offset --> 0.06342857142857143
Top ---> 18921 Middle ---> 18941 Bottom ---> 19199
t:  90%|██████████████████████████████████████████████████████████████       | 45/50 [01:08<00:07,  1.56s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 620.2281661576845 m - Right 808.2596917049432 m
Vehicle offset --> 0.07400000000000001
Top ---> 19260 Middle ---> 19053 Bottom ---> 19287
t:  92%|███████████████████████████████████████████████████████████████▍     | 46/50 [01:09<00:06,  1.58s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 644.2353141808427 m - Right 1287.1531399098671 m
Vehicle offset --> 0.111
Top ---> 19379 Middle ---> 19057 Bottom ---> 19628
t:  94%|████████████████████████████████████████████████████████████████▊    | 47/50 [01:11<00:04,  1.57s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 669.5869539108704 m - Right 2364.511365128562 m
Vehicle offset --> 0.12685714285714286
Top ---> 19667 Middle ---> 19127 Bottom ---> 19834
t:  96%|██████████████████████████████████████████████████████████████████▏  | 48/50 [01:12<00:03,  1.58s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 659.1031889538101 m - Right 33286.89121677854 m
Vehicle offset --> 0.1532857142857143
Top ---> 19669 Middle ---> 18984 Bottom ---> 20025
t:  98%|███████████████████████████████████████████████████████████████████▌ | 49/50 [01:14<00:01,  1.56s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 725.9490250434056 m - Right 3850.2687606984537 m
Vehicle offset --> 0.15857142857142859
Top ---> 20159 Middle ---> 19124 Bottom ---> 20011
t: 100%|█████████████████████████████████████████████████████████████████████| 50/50 [01:16<00:00,  1.56s/it, now=None]
--> Searching Around previous PolyFits Approach <--
Curvature radius Real World - Left => 763.0398030737655 m - Right 2315.3501734651395 m
Vehicle offset --> 0.15857142857142859
Top ---> 20212 Middle ---> 19064 Bottom ---> 19969
                                                                                                                       
Moviepy - Done !
Moviepy - video ready output_videos/project_video_final2.mp4
Wall time: 1min 19s
In [40]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(video_output))
Out[40]: